Round Overview
Let’s focus on the very first element of the list, `a[0]`. If this element is `3`, then the first 3 people in the list all come from the same country. This also means that `a[0]`, `a[1]` and `a[2]` must exist and each be 3. If the vector had fewer than 3 elements, or if one of `a[0]`, `a[1]` and `a[2]` were different from 3, we would have a contradiction and the result is -1. Otherwise, we know the first 3 elements represent a country and we move on to the next country.
We need to process a new country starting at the fourth element: `a[3]`. Let’s make it more general: we have `i = 3` as the starting index of the next country. If `i` is too large (`i >= n`), then there is no next country to process; all the people were already processed. Otherwise, let `x = a[i]`; this means that the next `x` people in the list all come from the same country, so we increment the country count. `a[i], a[i+1], …, a[i+x-1]` must each be `x`, which also requires `i + x <= n`. If any of these conditions is false, we have a contradiction and the result is -1. Otherwise, we set `i = i + x` and repeat this logic until we find a contradiction or we reach the end.
The previous algorithm, implemented in C++:
int solve(vector<int> a) {
    int n = a.size();
    int i = 0;
    int c = 0; // country count
    while (i < n) {
        c++;
        int x = a[i];
        if (i + x > n) {
            // not enough people
            return -1;
        }
        // Each of the next x people must come from the same country and thus
        // their answers must each be x
        for (int j = i; j < i + x; j++) {
            if (a[j] != x) {
                return -1;
            }
        }
        // move the index x units forward
        i += x;
    }
    return c;
}
We can also implement the algorithm recursively, as in this Python code:
class CountryGroup:
    def solve(self, a):
        INF = 10 ** 9  # a very large number, to mark the result of f() as invalid
        def f(a):
            if len(a) == 0:
                return 0
            elif a[:a[0]] != [a[0]] * a[0]:
                return INF
            else:
                return 1 + f(a[a[0]:])
        r = f(a)
        return -1 if r >= INF else r
RockPaperScissorsMagicEasy
Problem Details
An observation that, although not mandatory, can really simplify the problem: it doesn’t matter which card is in which position, only the counts matter. For the score, we only need to care about the number of games Alice wins. For each of Bob’s cards, no matter whether it is rock, paper or scissors, there are 3 possibilities:
Alice can choose a winning card: if Bob had rock, Alice can pick paper; if Bob had paper, Alice can pick scissors; and so on. This gives Alice a point.
Alice can choose a losing card: If Bob had rock, Alice can pick scissors. This gives Alice zero points.
Alice can choose a tie: Just choose the same card Bob picked. Zero points.
So instead of worrying about picking Rock, Paper or Scissors in each card index, we can worry about picking: Win, Lose or Tie.
Alice wants exactly `s` (score) points. This means that Alice must win exactly `s` cards. The remaining cards can each be a loss or a tie; it doesn’t matter which. We can first calculate the number of ways to pick the `s` winning cards, and then the number of ways to pick lose or tie for each of the remaining `n - s` cards. These two choices are independent, so we multiply the two results together for the final result.
The number of ways to pick `s` cards out of `n` is, by definition, `C(n,s)` or `((n),(s))` (the binomial coefficient).
The number of ways to pick lose or tie out of `n - s` cards is `2^(n-s)`.
Therefore, the final result is `((n),(s)) * 2^(n-s)`
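As a quick sanity check with small made-up numbers (not taken from the problem statement): for `n = 3` and `s = 2`, Alice must win exactly 2 of the 3 games, which can be done in `((3),(2)) = 3` ways, and the remaining game can independently be a loss or a tie, giving `3 * 2^1 = 6` plays in total.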
Problems that ask you to return the result modulo 1,000,000,007 (or a similar number) usually do so to allow us to solve problems that have large results without actually using those large numbers in calculations. We can handle these numbers with modular arithmetic. Read this recipe for more info.
There is only a small issue: we need to calculate `((n),(s)) = n! / (s! (n-s)!)`. Modular arithmetic lets us compute factorials and powers easily. The issue comes with the division, since division is not so simple in modular arithmetic.
One workaround is to use Pascal’s triangle to calculate the coefficient. Pascal’s triangle is based on simple additions, so modular arithmetic applies directly. The trade-off is that the complexity becomes `O(n^2)` instead of `O(n)`, which is not an issue for this problem’s constraints. C++ code:
const int MOD = 1000000007;
int count(vector<int> card, int score) {
    int n = card.size();
    if (score > n) {
        return 0;
    }
    // The result is C(n, score) * 2 ^ (n - score)
    vector<vector<int>> C(n + 1, vector<int>(n + 1));
    for (int i = 0; i <= n; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) {
            C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
        }
    }
    // multiply by 2 a total of (n - score) times
    long long x = C[n][score];
    for (int i = 0; i < n - score; i++) {
        x = (2 * x) % MOD;
    }
    return (int) x;
}
We could also compute the binomial coefficient by multiplying by the modular multiplicative inverse of the divisor instead of dividing.
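A minimal sketch of that idea (the helper names `modPow` and `binom` are ours, not part of the original solution): since 1,000,000,007 is prime, Fermat’s little theorem gives `a^(MOD-2)` as the modular inverse of `a`, so we can "divide" by `s!` and `(n-s)!` by multiplying with their inverses.
#include <vector>
using namespace std;

const long long MOD = 1000000007;

// fast modular exponentiation: computes (b ^ e) mod MOD
long long modPow(long long b, long long e) {
    long long r = 1;
    b %= MOD;
    while (e > 0) {
        if (e & 1) r = (r * b) % MOD;
        b = (b * b) % MOD;
        e >>= 1;
    }
    return r;
}

// C(n, s) mod MOD using factorials and Fermat inverses
long long binom(int n, int s) {
    vector<long long> fact(n + 1, 1);
    for (int i = 1; i <= n; i++) {
        fact[i] = (fact[i - 1] * i) % MOD;
    }
    long long invS = modPow(fact[s], MOD - 2);      // inverse of s!
    long long invNS = modPow(fact[n - s], MOD - 2); // inverse of (n-s)!
    return ((fact[n] * invS) % MOD) * invNS % MOD;
}
This brings the complexity down to `O(n + log MOD)` after the factorial table is built, though for this problem’s constraints Pascal’s triangle is more than fast enough.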
You can also use big numbers to do the calculations and apply the modulo at the end. Python switches to big integers automatically once the machine integer size is exceeded:
MOD = 1000000007
from math import factorial as f

class RockPaperScissorsMagicEasy:
    def count(self, card, score):
        n = len(card)
        if score > n:
            return 0
        def C(n, k):
            return f(n) // f(k) // f(n - k)
        return (C(n, score) * 2 ** (n - score)) % MOD
Let’s try a recursive approach: we process each index in increasing order. For `p = 0`, we decide whether to add `x[0]` to Alice’s pitch list `A` or to Bob’s pitch list `B`. In this case both lists are empty, so this doesn’t add any cost yet, but the state does change:
We move to the next element: `p’ = 0 + 1`
If we chose `A` then `A` becomes = `{ x[0] }`
If we chose `B`, `B` becomes = `{ x[0] }`
And so on; let’s try the general case: we have a position `p` and we know the contents of `A` and `B`.
Decide the list where `x[p]` should go: `A` or `B`.
If we choose `A`, then we first check: is `A` empty? If `A` is empty, the cost does not increase. If `A` is non-empty, then we add `x[p]` next to the last element of `A`, `A[ |A| - 1]`, which increases the cost by `|x[p] - A[ |A| - 1] |`. The state changes so that `A’` now has `x[p]` as its last element.
Choosing `B` has a similar logic.
`p` becomes `p+1`, and we should move forward to calculate the remaining costs.
The interesting thing here is that, in order to calculate the costs for indexes greater than or equal to `p`, we only need to know the last elements of `A` and `B` before we use index `p`; the remaining elements don’t affect the result. So let’s craft a function `f(p, a, b)` that finds the minimum cost of assigning the indexes greater than or equal to `p`, given that the current last elements of `A` and `B` are `a` and `b`, respectively. We also need a way to tell whether the lists are empty: if `a = -1` we know `A` is empty, and the same goes for `B` when `b = -1`.
If `p = n`, there are no more pitches and the result is 0.
Else we decide to add it to `A` or `B`.
If we add it to `A` , we calculate the cost `c` this adds. If `A` is empty (`a = -1`), then there is no cost. Else the cost is `|a - x[p]|`. The change in `A` makes its new last element `a’` become `x[p]`. The total cost is: `c + f(p+1, x[p], b)`.
For `B` we have a similar case. Find cost `c` then: `c + f(p+1, a, x[p])`.
Pick the minimum to find the best result.
The overall solution is `f(p = 0, -1, -1)`, the two lists start empty and we begin with the first element.
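To make the recurrence concrete, here is a minimal un-memoized sketch of `f` (the name `naiveF` and passing `x` explicitly are our own choices, not the original code; as written it runs in exponential time and is only meant to illustrate the transitions):
#include <vector>
#include <cstdlib>
#include <algorithm>
using namespace std;

int naiveF(const vector<int>& x, int p, int a, int b) {
    int n = x.size();
    if (p == n) return 0;                      // no more pitches
    // option 1: put x[p] in A; pay |a - x[p]| unless A was empty (a == -1)
    int costA = (a == -1) ? 0 : abs(a - x[p]);
    int best = costA + naiveF(x, p + 1, x[p], b);
    // option 2: put x[p] in B; pay |b - x[p]| unless B was empty (b == -1)
    int costB = (b == -1) ? 0 : abs(b - x[p]);
    best = min(best, costB + naiveF(x, p + 1, a, x[p]));
    return best;
}
// overall answer for the whole sequence: naiveF(x, 0, -1, -1)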
This recurrence is acyclic. Solving it with dynamic programming might seem complicated because `a` and `b` can be up to a million. However, it is quite possible that if you implement this recurrence with memoization, it passes in time, depending on the amount of overhead. This might be surprising unless we notice that this function, when memoized, only ever needs `O(n^2)` time and memory.
We would prefer to be more certain of a solution before implementing it. Why is it `O(n^2)`? There’s an easy first step: the values of `a` and `b` are either `-1` or one of the `O(n)` elements of the input. So instead of using the values `a` and `b`, we can use indexes `i` and `j` for the last positions of the array that were added to `A` and `B`, respectively. From this we can already see that the complexity is no worse than cubic: there are `O(n^3)` possible tuples `(p, i, j)`.
The key to reducing one factor is to notice that one of `a` or `b` is almost always equal to `x[p-1]`, because we do either `c + f(p+1, a, x[p])` or `c + f(p+1, x[p], b)`. The only exception is when `p = 0`, but this is just a special case, and we can assume the first element of the list always goes to `A`, so `f(0, -1, -1) = f(1, x[0], -1)`. Now we can always assume that either `a` or `b` is equal to `x[p-1]`. Counting the tuples of the forms `(p, x[p-1], j)` and `(p, i, x[p-1])`, each kind is `O(n^2)`.
This knowledge allows us to implement the recurrence in a simpler way. Since one of the last elements of the two lists is always `x[p-1]`, we can change the function to `f(p, i)`, where we assume that one list has `x[p-1]` as its last element and the other has `x[i]` as its last element. Note that in this approach we swap which list is the “current” one and which is the “other” as necessary. This implementation saves plenty of overhead and makes it evident that the time and memory complexity are `O(n^2)`.
int n;
vector<int> pitch;
int dp[2001][2001];

int f(int p, int lastB) {
    int &res = dp[p][lastB];
    if (res == -1) {
        if (p >= n) {
            res = 0;
        } else {
            // Choose A:
            res = abs(pitch[p] - pitch[p - 1]) + f(p + 1, lastB);
            // Choose B:
            int b;
            if (lastB == n) {
                // not chosen before
                b = f(p + 1, p - 1);
            } else {
                // chosen before, swap the sets:
                b = abs(pitch[p] - pitch[lastB]) + f(p + 1, p - 1);
            }
            res = std::min(res, b);
        }
    }
    return res;
}

int solve(vector<int> pitch) {
    this->pitch = pitch;
    memset(dp, -1, sizeof(dp));
    n = pitch.size();
    return f(1, n);
}
CountryGroupHard
Problem Details
If the information is sufficient to determine a unique scenario, that means there is exactly 1 way to fill in the unknown values. So we can transform the problem into a counting problem: if there is more than one way to fill the values, the result is “Insufficient”.
Let’s try with an example: `{3, 0, 3, 0, 0, 3, 0, 0, 2, 0}`
The first element is `3`, which means that the list must begin with 3 people from the same country. The first 3 elements must each be either 0 or 3; if not, this case is invalid. Otherwise, we count the number of ways to fill the remaining positions, `{0, 0, 3, 0, 0, 2, 0}`, and that will be our result.
So let’s solve `{0, 0, 3, 0, 0, 2, 0}`. This time we can pick a value for the first answer, which determines how many of the first elements belong to the same country. We can try `j = 1`. For `j = 1` to be valid, the first element must be a person from a country that has 1 person in total. This is possible because the answer for this single person is unknown. So we will need to know the result for `{0, 3, 0, 0, 2, 0}`.
For `j = 2`, the first two elements must belong to the same country. This is possible, since both are 0. So we should solve `{3, 0, 0, 2, 0}` to find the number of results in this case.
For `j = 3`, the first 3 people are from the same country. This is valid because the only non-0 answer in this group of 3 people is also 3. The result is the same result as `{0, 0, 2, 0}` in this case.
`j = 4` would be impossible: one of the answers in the group of the first 4 people is 3, which means we cannot have a country of 4 members in this area.
Using this knowledge we can make a recurrence relation. `f(V)` returns the number of ways to fill the missing values in a vector `V`. Note that the vector is always a suffix of the original vector. So we can actually represent it by a single index `i`, meaning that we need to solve this for the sub-vector that starts at index `i`.
If `V[i] = 0`, the first element of the subvector (beginning at index `i`) is unknown, so we can try each `j = 1, 2, 3, …, 100` and verify whether the first `j` numbers of the subvector are valid: each should be either 0 or `j`. If they are, then `f(i + j)` solves the remaining part of the problem. Depending on the situation, this represents the addition of various cases, one for each valid `j`.
If `V[i] != 0`, then we need exactly `j = V[i]` members of this country. There need to be enough people for this: `i + j <= n`. We also need to make sure filling all `V[k]` with `i <= k < i+j` with `j` is valid. This means that all elements must be 0 or `j`. If these conditions are true, `f(i + j)` solves the problem, else there are 0 ways to do the task.
As a base case, when `i = n`, there are no more people and there is one valid way to fill an empty set: Do nothing. The result is 1.
We can implement `f(i)` iteratively with dynamic programming. There is only a small issue: when `n >= 63`, the result might not fit in a 64-bit integer. You can verify this by estimating the maximum result for `n = 100`. If you try `{0}`, then `{0,0}`, then `{0,0,0}`, …, you will notice the results are powers of 2; for `n = 100`, the result can be `2^99`. One way to see this is to imagine choosing whether or not to place a country boundary between each pair of consecutive people. So the result might be too large for 64-bit integers.
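For instance (a small illustration of our own, not from the statement), `{0,0,0}` can be filled as `{1,1,1}`, `{1,2,2}`, `{2,2,1}` or `{3,3,3}`: that is `2^2 = 4` ways, one for each choice of boundaries in the 2 gaps between the 3 people.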
We can work around this by noticing that we do not need the actual count, only whether it is greater than 1. So whenever `f[i]` exceeds 2, capping it at `f[i] = 2` lets us solve the problem while keeping the values small.
string solve(vector<int> a) {
    int n = a.size();
    int f[n + 1];
    f[n] = 1;
    for (int i = n - 1; i >= 0; i--) {
        f[i] = 0;
        if (a[i] == 0) {
            // we don't know how many, pick one.
            int must = -1;
            for (int j = 1; (i + j <= n); j++) {
                int x = a[i + j - 1];
                if (x != 0) {
                    if ((must != -1) && (must != x)) {
                        break;
                    } else {
                        must = x;
                    }
                }
                if ((must == -1) || (must == j)) {
                    f[i] += f[i + j];
                }
            }
        } else {
            // already know how many belong to this country:
            int j = a[i];
            bool bad = (i + j > n);
            for (int k = i; k < n && k < i + j; k++) {
                if ((a[k] != 0) && (a[k] != j)) {
                    bad = true;
                }
            }
            if (!bad) {
                f[i] = f[i + j];
            }
        }
        // avoid unneeded overflow
        if (f[i] >= 2) {
            f[i] = 2;
        }
    }
    assert(f[0] != 0);
    return (f[0] > 1) ? "Insufficient" : "Sufficient";
}
This is a minimum cut problem. To reach that conclusion it helps to picture the pitches as a graph. Imagine each pitch from 1 to `N` being a vertex. There are relationships between pitches, so let’s create edges between pairs of pitches that are consecutive in the input, and weight each edge with the number of times the two pitches appear adjacent. This weight is the cost we pay in case the two pitches end up assigned to different people. For example, this would be example 3: `N=10`, pitches = `{1,4,3,5,2,5,7,5,9}`.
This is a good representation of the cost to make two adjacent vertices have different owners, but we also need a restriction, something that would force us to make some vertices belong to Alice and some to Bob. Otherwise we would just make all vertices have the same owner and the cost is 0.
In the way the input is defined, Alice can only use pitches from `low` to `N`, which means Bob is forced to take the pitches lower than `low`. Likewise, Alice must take the pitches higher than `high`. These are the restrictions we wanted. In the above example, we have `low = 4` and `high = 5`. Now imagine two extra vertices representing Alice and Bob and connect them to the pitches they are forced to use.
This is, in fact, the solution to the problem. The cost to make `B` have a different owner from its adjacent vertices is infinite, because we are not allowed to do that; the same holds for `A`. However, not all vertices can have the same owner, so we need to cut all paths between `A` and `B`. After cutting, we get two distinct components, each of which can be owned by the respective person without contradictions. So we want the minimum cost of such a cut. This is the minimum cut problem, which can be reduced to maximum flow: `B` becomes the source and `A` the sink of a max flow problem.
Let’s solve this case. What is the minimum weight cut, the minimum sum of weights of removed edges in this graph so that `A` and `B` become disconnected?
The answer is 3. And we can tell which pitch to assign to each person after knowing how to cut the graph.
There are some corner cases to the logic we used. What if there are vertices, or whole components, that are not initially connected to `A` or `B`? Those components can belong to either owner. Another question is what happens if, after the cut, there are 3 disjoint components instead of 2. If we ignore any components that were initially disconnected from both `A` and `B`, it is impossible to end up with more than 2 components after the cut, because it is a minimum cut: if there were 3 components, at least one edge would have been cut unnecessarily.
/* The not-included MaxFlow solver works as follows:
   A network class with a source, a sink vertex and the following methods:
   addVertex() : Adds a new vertex to the graph, returns a unique integer
   addEdge(u, v, c, undirected) creates an edge between u and v with
   capacity c. If undirected is true, the edge is undirected.
   maxFlow() returns maximum flow of network
*/
int solve(int N, int low, int high, vector<int> pitch) {
    // instance a network class:
    unique_ptr<MaxFlow::network> g(new MaxFlow::network());
    // add the vertices:
    g->source = g->addVertex();
    for (int i = 1; i <= N; i++) {
        g->addVertex();
    }
    g->sink = g->addVertex();
    // vertices 1 to N are pitches
    // Bob's forced vertices:
    for (int i = 1; i < low; i++) {
        g->addEdge(g->source, i, MaxFlow::INF);
    }
    // Alice's forced vertices:
    for (int i = high + 1; i <= N; i++) {
        g->addEdge(i, g->sink, MaxFlow::INF);
    }
    // Adjacent ones:
    for (int i = 0; i + 1 < (int) pitch.size(); i++) {
        if (pitch[i] != pitch[i + 1]) {
            g->addEdge(pitch[i], pitch[i + 1], 1, true);
        }
    }
    // return the max flow, note that unique_ptr will delete g when scope ends
    return g->maxFlow();
}
RockPaperScissorsMagic
Problem Details
Let `c_i` be the number of cards Bob played of type `i`, for `0 \le i \le 2`.
First, let’s make a simple observation: if Alice plays a rock and a paper against two of Bob’s type-0 cards, it doesn’t matter whether she plays the rock or the paper first; the final score will be the same. Therefore, the only thing that matters for the final score is which cards are matched against each of Bob’s card types. In other words, we can describe Alice’s play by the 9-integer tuple `P = (R_0, S_0, P_0, R_1, S_1, P_1, R_2, S_2, P_2)` (where `R_0` is the number of rock cards Alice played against type-0 cards, and similarly for the other variables). We have the condition `R_i + S_i + P_i = c_i` for all `0 \le i \le 2`.
Let’s denote by `T_R(R, S, P)` the score obtained if you play `R` rocks, `S` scissors and `P` papers against Bob’s cards of type rock. In mathematical terms, we have `T_R(R, S, P) = R*tie + S*win + P*lose`. Define `T_S` and `T_P` similarly.
For the magic trick to work, we need to be able to assign any permutation of rock, paper, scissors to Bob’s cards of type 0, type 1 and type 2 and the score should be the same. One of these permutations is 0 = Rock, 1 = Scissors, 2 = Paper, which gives Alice a final score of `T_R(R_0, S_0, P_0) + T_S(R_1, S_1, P_1) + T_P(R_2, S_2, P_2)`.
By the condition for the magic trick, the result should be the same if we swap two assignments in our permutation. So if we have the permutation 0 = Rock, 1 = Paper, 2 = Scissors, the score has to be equal to the previous one. Or, in an equation:
(score for 0 = Rock, 1 = Scissors, 2 = Paper) = (score for 0 = Rock, 1 = Paper, 2 = Scissors)
`T_R(R_0, S_0, P_0) + T_S(R_1, S_1, P_1) + T_P(R_2, S_2, P_2) = T_R(R_0, S_0, P_0) + T_P(R_1, S_1, P_1) + T_S(R_2, S_2, P_2)`
`T_S(R_1, S_1, P_1) - T_P(R_1, S_1, P_1) = T_S(R_2, S_2, P_2) - T_P(R_2, S_2, P_2)`
This is an important relation, because it tells us the condition that has to be satisfied for the trick to work with this specific swap. Remembering that the permutation and the swap we chose was arbitrary, we can generalize this result for all possible swaps by the result:
`T_x(R_i, S_i, P_i) - T_y(R_i, S_i, P_i) = T_x(R_j, S_j, P_j) - T_y(R_j, S_j, P_j)`
What this essentially tells us is that `T_x(R_i, S_i, P_i) - T_y(R_i, S_i, P_i)` has to be the same constant for all `i`. By performing specific swaps, we can show that this condition is necessary, but is it sufficient? It is so, because we can obtain any permutation of Bob’s card types by starting with some permutation and doing some swaps, and the validity of the condition shows that the total score will be preserved after each swap.
We can simplify the 6 possibilities for `(x,y)` by noting that most of them are redundant. In fact, if `C_1 = T_S(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)` and `C_2 = T_P(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)` have the same value for all `i`, it is clear that `T_x(R_i, S_i, P_i) - T_y(R_i, S_i, P_i)` is also a constant for any choice of `(x,y)`:
`T_S(R_i, S_i, P_i) - T_R(R_i, S_i, P_i) = C_1`
`T_R(R_i, S_i, P_i) - T_S(R_i, S_i, P_i) = -C_1`
`T_P(R_i, S_i, P_i) - T_R(R_i, S_i, P_i) = C_2`
`T_R(R_i, S_i, P_i) - T_P(R_i, S_i, P_i) = -C_2`
`T_S(R_i, S_i, P_i) - T_P(R_i, S_i, P_i) = (T_S(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)) - (T_P(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)) = C_1 - C_2`
`T_P(R_i, S_i, P_i) - T_S(R_i, S_i, P_i) = (T_P(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)) - (T_S(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)) = C_2 - C_1`
One important property of our relation is that it cares about two constants `C_1 = T_S(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)` and `C_2 = T_P(R_i, S_i, P_i) - T_R(R_i, S_i, P_i)` that can be calculated knowing `R_i, S_i, P_i` for a single value of `i`. This, coupled with the fact that the picks we make for a certain card type don’t influence the picks we can make for other types, leads to the question of whether we can somehow exploit this to process each card type separately.
For each card type `i`, we can count the number of ways to choose cards to obtain constants `(C_1, C_2)` easily: since there are at most `\binom{1002}{2} = 501501` possibilities to pick `R_i, S_i, P_i`, we can just try them all. An important point to note is that we haven’t been caring about order to calculate scores up to this point, but it matters when counting, because different orders count as different combinations. So, when counting, we have to take into account that the cards that will play against a specific one of Bob’s card types can be played in any order. Therefore, we need to take each `(R_i, S_i, P_i)` triplet as `\frac{(R_i+S_i+P_i)!}{R_i!S_i!P_i!}` distinct combinations.
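For example (a tiny illustration of our own), if Alice answers one of Bob’s card types with one rock and one scissors, i.e. `(R_i, S_i, P_i) = (1, 1, 0)`, those two cards can be played in `\frac{2!}{1!1!0!} = 2` different orders, so this triplet contributes 2 distinct combinations.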
Remembering that the picks we make for a certain card type don’t influence the picks we can make for other types, the only restriction we have for joining the separate results is the magic trick condition. So we can merge the results for every card type by simply multiplying the results for every pair of constants `(C_1, C_2)`. The total number of combinations for all constant pairs after the merge is the answer we want.
map<pair<int, int>, int> mm[3];
int cnt[1100];
long long fac[1100];
long long invfac[1100];
const int mod = 1000000007;

long long modpow(int b, int e) {
    if (e == 0) return 1;
    if (e == 1) return b;
    long long t = modpow(b, e / 2);
    t = (t * t) % mod;
    if (e & 1) t = (t * b) % mod;
    return t;
}

int RockPaperScissorsMagic::count(int win, int lose, int tie, vector<int> card) {
    // Precompute factorials / inverse factorials
    fac[0] = invfac[0] = 1;
    for (int i = 1; i <= 1010; i++) {
        fac[i] = (fac[i - 1] * i) % mod;
        invfac[i] = modpow(fac[i], mod - 2);
    }
    // Count how many cards of each type Bob used
    for (int i = 0; i < (int) card.size(); i++) {
        cnt[card[i]]++;
    }
    // For all card types...
    for (int i = 0; i < 3; i++) {
        // Try all rock, scissors, paper combinations
        for (int rck = 0; rck <= cnt[i]; rck++) {
            for (int scs = 0; scs <= cnt[i] - rck; scs++) {
                int pap = cnt[i] - rck - scs;
                // Calculate constants for this combination
                int TR = rck * tie + scs * win + pap * lose;
                int TS = rck * lose + scs * tie + pap * win;
                int TP = rck * win + scs * lose + pap * tie;
                int C1 = TS - TR;
                int C2 = TP - TR;
                // th is the number of distinct orders for this (Ri,Si,Pi) triple
                // th = (rck+scs+pap)! / rck!scs!pap!
                long long th = (fac[cnt[i]] * invfac[rck]) % mod;
                th = (th * invfac[scs]) % mod;
                th = (th * invfac[pap]) % mod;
                // Record result
                mm[i][make_pair(C1, C2)] = (mm[i][make_pair(C1, C2)] + th) % mod;
            }
        }
    }
    // Merge individual results by multiplying results
    // with same values of (C1, C2)
    long long ans = 0;
    int m = min_element(cnt, cnt + 3) - cnt;
    for (auto p : mm[m]) {
        long long th = 1;
        for (int ot = 0; ot < 3; ot++) {
            if (mm[ot].count(p.first)) {
                th *= mm[ot][p.first];
            } else {
                th = 0;
            }
            th %= mod;
        }
        // Sum total possibilities for (C1, C2) to final answer
        ans = (ans + th) % mod;
    }
    return ans;
}